Maximum Entropy Spectral Modeling Approach to Mesopic Tone Mapping

Authors

  • Mehdi Rezagholizadeh
  • James J. Clark
Abstract

Tone mapping algorithms should be informed by accurate color appearance models (CAMs) so that the perceptual fidelity of the rendering is maintained through the tone mapping transformations. Current tone mapping techniques, however, suffer from a lack of good color appearance models for mesopic conditions. Only a few appearance models suited to the mesopic range are currently available, and none of them performs very well. In this paper we evaluate some of the most prominent models available for mesopic and scotopic vision, focusing in particular on the iCAM06 model as one of the best-known tone reproduction techniques. We introduce a spectral-based color appearance model for mesopic conditions that can be incorporated into tone reproduction methods. Based on the maximum entropy spectral modeling approach of Clark and Skaff [1], it is a powerful color appearance model that can predict color appearance under mesopic as well as photopic conditions. Our model incorporates the CIE system for mesopic photometry, which increases the accuracy of the color appearance model. At low (mesopic) light levels, two factors come into play that are absent from high light level (photopic) spectral modeling. The first is that image noise becomes significant. The Clark and Skaff model treats the noise as an inherent part of the modeling process, and an estimate of the noise level sets the trade-off between the consistency of the solution with the measurements and the spectral smoothing imposed by the maximum entropy constraint. The second factor in mesopic vision is that both the rod and the cone systems are active, which requires a modification of the sensor model. The relative contribution of the rod and cone systems depends on the overall light level in this regime, and our approach is adaptive in this sense. We present several experiments comparing the performance of our tone mapping approach with that of existing methods. The proposed method performs very well in this regard, which also demonstrates the potential of our model to become part of wide-range tone mapping systems.

Introduction

Our visual system is able to deal with an enormous range of absolute light levels, from bright sunny days to starlit scenes; moreover, the eye can perceive a high dynamic range of luminance (around four orders of magnitude) simultaneously without losing clarity. It is known that our eyes have different sensitivities under different lighting conditions, being less sensitive in bright scenes than in dark ones. Adjusting the sensitivity is done partly by changing the pupil size; the rest is compensated by adaptation mechanisms in the cone and rod photoreceptors. Cameras, however, cannot handle high dynamic range scenes as easily as our eyes do. Problems arise when capturing scenes of this kind, where the image sensor is faced with over- and under-exposed regions in the images. One possible solution to this problem was introduced by Debevec and Malik [2], who suggested imaging with multiple exposures and proposed a technique for combining them (see the sketch below). Currently available CCD and CMOS image sensors are capable of capturing a wide range of luminance; however, most existing displays cannot reproduce more than two orders of magnitude. Hence, most cameras deliver an 8-bit image that fits the available dynamic range of displays. We can therefore expect displays to be incapable of rendering high dynamic range (HDR) images, which is known as the HDR display problem.
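
To make the multiple-exposure idea of Debevec and Malik [2] concrete, the following Python sketch merges a bracketed exposure stack into a single radiance map using a weighted average of log exposures. It is a minimal illustration rather than the published algorithm: it assumes a linear (already calibrated) camera response and uses a simple hat-shaped weighting function, whereas the original method also recovers the camera response curve from the stack.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge a bracketed exposure stack into an HDR radiance map (sketch).

    images: list of float arrays in [0, 1], all with the same shape.
    exposure_times: exposure time in seconds for each image.
    Assumes a linear camera response; a full Debevec-Malik pipeline would
    first estimate the response curve from the stack itself.
    """
    log_radiance = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    eps = 1e-6
    for img, t in zip(images, exposure_times):
        # Hat weighting: trust mid-range pixels, discount clipped ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        log_radiance += w * (np.log(img + eps) - np.log(t))
        weight_sum += w
    return np.exp(log_radiance / np.maximum(weight_sum, eps))
```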

Tone mapping is a solution to this problem: it maps the high dynamic range scene intensities to the low dynamic range display output in such a way that the reproduced image perceptually matches the original scene. Several tone mapping techniques have been proposed, among them the multi-scale model of Pattanaik et al. [3], the perceptually based tone mapping of Irawan et al. [4], and the iCAM06 tone reproduction technique [5]. A complete review of the available tone mapping operators can be found in [6]. Current tone mapping techniques and color appearance models (CAMs) try to solve different problems; however, as Erik Reinhard states in [7], they are two sides of the same coin, i.e. tone reproduction algorithms and color appearance models should be unified to predict the correct appearance of images with a wide range of intensities. On the CAM side, an abundance of models is available, such as the Nayatani et al. model, the Hunt model, the RLAB model, and the CIECAM97 and CIECAM02 models, most of which are explained in detail in [8]. However, none of them is appropriate for use in tone mapping algorithms, and few works focus on mesopic color appearance. We can say that current tone mapping techniques suffer from a lack of a suitable color appearance model for mesopic vision. In this work we discuss some of the well-known mesopic vision models currently available in the literature. We then propose a spectral model accounting for the rod-cone interaction under mesopic conditions. All of the models mentioned in this work are implemented, evaluated and compared with each other. One of the main purposes of this study is to illustrate the weaknesses and strengths of the best-known mesopic models and to analyze their similarities and differences. Furthermore, this work investigates the quality of tone mapping techniques (especially iCAM06) in reproducing mesopic scenes. We hope that the spectral color appearance model for mesopic vision proposed in this work opens a way towards closing the gap between tone reproduction techniques and color appearance models.

Models for Mesopic Vision: Physiological Background

Our visual system consists of several layers working in parallel to transport the light's excitation to the visual cortex, the part of the brain responsible for interpreting the visual input to the eye, i.e. visual perception. Light falls on the retina and stimulates four types of photoreceptors: the rods and the long-, medium- and short-wavelength sensitive cones. The output of a rod cell is connected to a rod bipolar cell, while the cone photoreceptors are connected to on-bipolar and off-bipolar cells. The outputs of the bipolar cells are then transmitted to the ganglion cells, which form the optic nerve. Three types of ganglion cells work in parallel and constitute the parvocellular (PC) pathway, corresponding to the red/green opponency; the magnocellular (MC) pathway, corresponding to the achromatic signal; and the koniocellular (KC) pathway, corresponding to the blue/yellow opponency [9]. These three pathways carry the visual information to the higher levels of the visual system. Under photopic conditions the rod cells bleach completely, and only the cones are sensitive to light at luminances greater than 5 cd/m^2. Under mesopic conditions, gap junctions form between rod and cone bipolar cells [10]; hence rods may contribute to all three pathways through these gap junctions. Under scotopic conditions the light level is below the cone sensitivity threshold, so there is no cone contribution to the pathways. Rod photoreceptors, however, are so sensitive that they can capture even a single photon in the dark and amplify it to a perceivable response. Therefore, under mesopic conditions, rod cells participate in our vision by sharing their response with the cone cells.

Modeling Blue Shift in Moonlit Scenes

The first algorithm we consider was proposed by Khan and Pattanaik [10]. Their work aims at modeling the 'blue shift' perceived in dark scenes. Recent findings show that rod cells contribute to off-bipolar cells under scotopic conditions by forming chemical synapses. Based on this theory, and to explain the blue shift, the authors hypothesize that these synapses are established only between rods and short-wavelength cones. They propose the following steps to calculate the RGB response with the blue shift (a sketch in code follows these steps).

1. Given the RGB response, the scotopic luminance values I_rod are obtained, and the adaptation intensity is set to 0.03 cd/m^2.
2. For each pixel, the scotopic luminance is plugged into the Hunt model for predicting the photoreceptor response to the light intensity I, and the rod response R_rod is calculated.
3. The cone response values R_l, R_m and R_s are assumed to be zero, since cone cells do not respond under scotopic conditions.
4. The final scotopic image is obtained by adding 20% of the rod response to the S-cone signal and then projecting the result back into the initial RGB space:

R_s = R_s + 0.2 R_rod    (1)

The way the authors address the blue shift thus amounts to adding some blue to the initial image, and the output of this algorithm does not look natural or realistic.
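
The following Python sketch walks through the four steps above. It is a schematic illustration rather than the authors' implementation: the scotopic luminance weights, the Hunt-style response nonlinearity, and the RGB-to-LMS matrix are stand-ins chosen for illustration.

```python
import numpy as np

# Illustrative linear-sRGB -> LMS matrix (Hunt-Pointer-Estevez primaries);
# the exact transform used by Khan and Pattanaik may differ.
RGB_TO_LMS = np.array([[0.3139, 0.6395, 0.0466],
                       [0.1552, 0.7579, 0.0869],
                       [0.0178, 0.1095, 0.8727]])

def hunt_response(I, I_adapt):
    """Hunt-style photoreceptor response (schematic compressive form)."""
    x = (I / I_adapt) ** 0.73
    return 40.0 * x / (x + 2.0)

def blue_shift(rgb, adapt=0.03):
    """Steps 1-4 of the Khan-Pattanaik blue-shift model (sketch).

    rgb: HxWx3 linear image scaled so that values approximate cd/m^2.
    adapt: adaptation intensity, set to 0.03 cd/m^2 as in step 1.
    """
    # Step 1: scotopic luminance per pixel (illustrative weights).
    I_rod = np.clip(rgb @ np.array([-0.702, 1.039, 0.433]), 1e-6, None)
    # Step 2: rod response from the Hunt-style nonlinearity.
    R_rod = hunt_response(I_rod, adapt)
    # Step 3: cone responses are zero under scotopic conditions.
    lms = np.zeros_like(rgb)
    # Step 4: add 20% of the rod response to the S channel (Eq. 1) ...
    lms[..., 2] += 0.2 * R_rod
    # ... and project the result back into the initial RGB space.
    return lms @ np.linalg.inv(RGB_TO_LMS).T
```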

Cao Model of Mesopic Vision

Cao et al. proposed a model for mesopic vision based on experiments they conducted [11]. Their results imply that the rod contributions to the PC, MC, and KC pathways are linearly related to the rod contrast, and the model parameters are obtained by fitting to the experimental data. Kirk and O'Brien built a perceptually based tone mapping method for mesopic conditions on top of the Cao model [12]. The Cao model can be summarized in three fundamental steps; we keep the same notation as [12], and a sketch in code follows the list.

1. The rod responses are involved in setting three regulators, g_L, g_M and g_S:

g_L = 1 / (1 + 0.33 (q_L + κ_1 q_rod))
g_M = 1 / (1 + 0.33 (q_M + κ_1 q_rod))
g_S = 1 / (1 + 0.33 (q_S + κ_2 q_rod))    (2)

where κ_1 is a coefficient that adjusts the proportion of rod to cone response, and q_i, i ∈ {L, M, S}, are the cone responses. These three regulators determine the amount of the color shift in the opponent color model.

2. The regulators and the rod response determine the amount of shift in each opponent channel:

Δo_R/G = x κ_1 (ρ_1 g_M / m_max − ρ_2 g_L / l_max) q_rod
Δo_B/Y = y (ρ_3 g_S / s_max − ρ_4 W) q_rod
Δo_Lum = z W q_rod,  with  W = α g_L / l_max + (1 − α) g_M / m_max    (3)

where x, y and z are free tuning coefficients; l_max = 0.637, m_max = 0.392 and s_max = 1.606 are the maximum values of the cone fundamentals [12]; and the ρ_i and α are fitting parameters set to ρ_1 = 1.111, ρ_2 = 0.939, ρ_3 = 0.4, ρ_4 = 0.15 and α = 0.619. W is a positive value that can be used as a measure of the mesopic level, with W = 0 indicating the fully photopic condition. It is worth mentioning that the color shifts are nonlinear functions of the g_i but linear functions of the rod response.

3. The shifted cone responses, which account for the mesopic color appearance effects, are obtained as a linear combination of the cone responses and the calculated opponent color shift components:

q̂ = [q_L q_M q_S]^T + Δq̂,  Δq̂ = A^(−1) Δo    (4)

where A is the transformation matrix from the shifted cone responses to the opponent color space:

o_R/G = q̂_M − q̂_L
o_B/Y = q̂_S − (q̂_L − q̂_M)
o_Lum = q̂_L + q̂_M    (5)
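
A compact Python sketch of the three steps above, assuming per-pixel cone responses q_L, q_M, q_S and a rod response q_rod are already available. The defaults for x, y, z, κ_1 and κ_2 are placeholders rather than values from the paper; the matrix A is constructed from Eq. (5).

```python
import numpy as np

# Parameters reported in the summary above (from Kirk and O'Brien [12]).
L_MAX, M_MAX, S_MAX = 0.637, 0.392, 1.606
RHO = (1.111, 0.939, 0.4, 0.15)
ALPHA = 0.619

# Opponent transform implied by Eq. (5): o = A @ q_hat.
A = np.array([[-1.0, 1.0, 0.0],    # o_R/G = q_M - q_L
              [-1.0, 1.0, 1.0],    # o_B/Y = q_S - (q_L - q_M)
              [ 1.0, 1.0, 0.0]])   # o_Lum = q_L + q_M
A_INV = np.linalg.inv(A)

def cao_shifted_cones(qL, qM, qS, q_rod, x=1.0, y=1.0, z=1.0,
                      kappa1=1.0, kappa2=1.0):
    """Return mesopically shifted cone responses (Eqs. 2-4), per pixel.

    qL, qM, qS, q_rod: arrays of cone and rod responses.
    x, y, z, kappa1, kappa2: free coefficients; defaults are placeholders.
    """
    # Step 1: regulators (Eq. 2).
    gL = 1.0 / (1.0 + 0.33 * (qL + kappa1 * q_rod))
    gM = 1.0 / (1.0 + 0.33 * (qM + kappa1 * q_rod))
    gS = 1.0 / (1.0 + 0.33 * (qS + kappa2 * q_rod))
    # Step 2: opponent shifts (Eq. 3).
    W = ALPHA * gL / L_MAX + (1.0 - ALPHA) * gM / M_MAX
    d_rg = x * kappa1 * (RHO[0] * gM / M_MAX - RHO[1] * gL / L_MAX) * q_rod
    d_by = y * (RHO[2] * gS / S_MAX - RHO[3] * W) * q_rod
    d_lum = z * W * q_rod
    # Step 3: map the opponent shifts back to cone space (Eq. 4).
    delta_o = np.stack([d_rg, d_by, d_lum], axis=-1)
    delta_q = delta_o @ A_INV.T
    return np.stack([qL, qM, qS], axis=-1) + delta_q
```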

iCAM06 Tone Compression Model for Mesopic Vision

As mentioned before, the iCAM06 tone mapping technique accounts for mesopic conditions by including the rod response in its tone compression operator [5]. Keeping the same notation as the original article, the model can be summarized as follows.

1. The chromatically adapted image is the input to the tone compression unit. In the first step this image is converted to the Hunt-Pointer-Estevez space, and the cone responses are then obtained using the cone response functions introduced by Hunt:

R_a = 400 (F_L R / Y_W)^p / (27.13 + (F_L R / Y_W)^p) + 0.1
G_a = 400 (F_L G / Y_W)^p / (27.13 + (F_L G / Y_W)^p) + 0.1
B_a = 400 (F_L B / Y_W)^p / (27.13 + (F_L B / Y_W)^p) + 0.1

F_L = 0.2 k^4 (5 L_A) + 0.1 (1 − k^4)^2 (5 L_A)^(1/3),  k = 1 / (5 L_A + 1)

where Y_W is the white luminance, L_A is the adapting luminance, and p is the exponent controlling the steepness of the compression.
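
A minimal Python sketch of the cone tone compression step above, assuming the input image is already chromatically adapted and expressed in Hunt-Pointer-Estevez space. The exponent value p = 0.7 and the luminances in the usage example are assumed for illustration, and the rod pathway and the subsequent iCAM06 steps (not reproduced in this excerpt) are omitted.

```python
import numpy as np

def luminance_adaptation_factor(LA):
    """CIECAM02-style luminance-level adaptation factor F_L."""
    k = 1.0 / (5.0 * LA + 1.0)
    return 0.2 * k**4 * (5.0 * LA) + 0.1 * (1.0 - k**4)**2 * (5.0 * LA)**(1.0 / 3.0)

def tone_compress_cones(rgb_hpe, Yw, LA, p=0.7):
    """Hunt-style compressive nonlinearity applied to each HPE channel.

    rgb_hpe: HxWx3 chromatically adapted image in Hunt-Pointer-Estevez space.
    Yw: white luminance (cd/m^2); LA: adapting luminance (cd/m^2).
    p: compression exponent; 0.7 is an assumed illustrative value.
    """
    FL = luminance_adaptation_factor(LA)
    x = np.power(np.clip(FL * rgb_hpe / Yw, 0.0, None), p)
    return 400.0 * x / (27.13 + x) + 0.1

# Example usage with a synthetic image patch (values stand in for HPE signals).
img = np.random.rand(4, 4, 3) * 100.0
compressed = tone_compress_cones(img, Yw=100.0, LA=20.0)
```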

Publication date: 2013